Vision Transformers have emerged as a new paradigm in computer vision, showing excellent performance but at a high computational cost. Image token pruning is one of the main approaches to ViT compression, because the complexity is quadratic in the number of tokens and many tokens containing only background regions do not truly contribute to the final prediction. Existing works either rely on additional modules to score the importance of individual tokens, or apply a fixed-ratio pruning strategy to all input instances. In this work, we propose an adaptive sparse token pruning framework with minimal cost. Our method is based on learnable thresholds and leverages multi-head self-attention to evaluate token informativeness with almost no extra operations. Specifically, we first propose an inexpensive head-importance-weighted class-attention scoring mechanism. Then, learnable parameters are inserted into the ViT as thresholds to distinguish informative tokens from unimportant ones. By comparing token attention scores with the thresholds, we can hierarchically discard useless tokens and thus accelerate inference. The learnable thresholds are optimized with budget-aware training to balance accuracy and complexity, so that different input instances receive correspondingly different pruning configurations. Extensive experiments demonstrate the effectiveness of our approach. For example, our method improves the throughput of DeiT-S by 50% with only a 0.2% drop in top-1 accuracy, which achieves a better trade-off between accuracy and latency than previous methods.
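The threshold-pruning idea above can be illustrated with a minimal NumPy sketch: score each image token by how much attention the class token pays to it (averaged over heads), then keep only tokens whose score clears a threshold. This is a simplified stand-in, not the authors' implementation; the function names and shapes are illustrative, and the threshold is a plain float here rather than a learnable parameter.

```python
import numpy as np

def class_attention_scores(attn):
    """Per-token importance from multi-head self-attention.

    attn: (heads, tokens, tokens) softmax attention weights, with the
    class token at index 0. Each image token is scored by the class
    token's attention to it, averaged over heads.
    """
    return attn[:, 0, 1:].mean(axis=0)

def prune_tokens(tokens, attn, threshold):
    """Drop image tokens whose attention score falls below the threshold;
    the class token (index 0) is always kept."""
    scores = class_attention_scores(attn)
    keep = np.concatenate(([True], scores > threshold))
    return tokens[keep]
```

In the paper the threshold is optimized during budget-aware training, and pruning is applied hierarchically across layers; the sketch shows a single layer's decision.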
Combining multiple sensors enables a robot to maximize its perceptual awareness of the environment and enhances its robustness to external disturbances, which is crucial for robot navigation. This paper presents the FusionPortable benchmark, a complete multi-sensor dataset with diverse sequences for mobile robots. This paper makes three contributions. First, we advance a portable and versatile multi-sensor suite that offers rich sensory measurements: 10 Hz LiDAR point clouds, 20 Hz stereo frame images, high-rate and asynchronous events from stereo event cameras, 200 Hz inertial readings from an IMU, and 10 Hz GPS signals. The sensors are temporally synchronized in hardware. The device is lightweight, self-contained, and provides plug-and-play support for mobile robots. Second, we build a dataset by collecting 17 sequences that cover a variety of environments on campus, using multiple robot platforms for data collection. Some sequences are challenging for existing SLAM algorithms. Third, we provide ground truth for evaluating localization and mapping performance. We also evaluate state-of-the-art SLAM methods and identify their limitations. The dataset, consisting of raw sensor measurements, ground truth, calibration data, and evaluation algorithms, will be released at: https://ram-lab.com/file/site/site/multi-sensor-dataset.
We introduce a power-of-two low-bit post-training quantization (PTQ) method for deep neural networks that meets hardware requirements and does not require lengthy retraining. Power-of-two quantization can convert the multiplications introduced by quantization and dequantization into bit shifts, which are adopted by many efficient accelerators. However, power-of-two scale factors have fewer candidate values, which causes larger rounding or clipping errors. We propose a novel power-of-two PTQ framework, called RAPQ, which dynamically adjusts the power-of-two scales of the whole network instead of statically determining them layer by layer. Theoretically, it trades off the rounding error and clipping error of the whole network. Meanwhile, the reconstruction method in RAPQ is based on the BN information of each unit. Extensive experiments on ImageNet demonstrate the excellent performance of our proposed method. Without bells and whistles, RAPQ can reach 65% and 48% accuracy on ResNet-18 and MobileNetV2, respectively, with INT2 weights and INT4 activations. We are the first to propose a more constrained but hardware-friendly power-of-two quantization scheme for low-bit PTQ and prove that it can achieve nearly the same accuracy as SOTA PTQ methods. The code has been released.
High-definition (HD) maps can provide precise geometric and semantic information about the static traffic environment for autonomous driving. Road boundaries are among the most important pieces of information contained in HD maps, since they distinguish road areas from off-road areas and can guide vehicles to drive within the road region. However, annotating road boundaries for HD maps at city scale is labor-intensive. To enable automatic HD map annotation, current work uses semantic segmentation or iterative graph growing for road boundary detection. However, the former cannot ensure topological correctness since it works at the pixel level, while the latter suffers from inefficiency and drifting problems. To address the aforementioned problems, in this letter we propose a new system called CSBoundary to automatically detect road boundaries at city scale for HD map annotation. Our network takes an aerial image patch as input and directly infers the continuous road boundary graph (i.e., vertices and edges) from this image. To generate the city-scale road boundary graph, we stitch the graphs obtained from all image patches. Our CSBoundary is evaluated and compared on a public benchmark dataset. The results demonstrate its superiority. An accompanying demo video is available on our project page \url{https://sites.google.com/view/csbound/}.
Scene text erasing aims to remove text content from scene images, and current state-of-the-art text erasing models are trained on large-scale synthetic data. Although data synthesis engines can provide vast amounts of annotated training samples, there is a gap between synthetic and real-world data. In this paper, we employ self-supervision for feature representation learning on unlabeled real-world scene text images. A novel pretext task is designed to keep features consistent between the text masks of image variants. We design a progressive erasing network to remove residual text. The scene text is erased progressively by leveraging the intermediate generated results, which lay the foundation for subsequent higher-quality results. Experiments show that our method significantly improves the generalization of the text erasing task and achieves state-of-the-art performance on public benchmarks.
Document layout analysis (DLA) plays an important role in information extraction and document understanding. Document layout analysis has reached milestone achievements, but analysis of non-Manhattan layouts is still a challenge. In this paper, we propose an image layer modeling method to tackle this challenge. To evaluate the proposed image layer modeling method, we present a manually-labeled fine-grained segmentation dataset of non-Manhattan layouts, named FPD. To the best of our knowledge, FPD is the first manually-labeled fine-grained segmentation dataset for non-Manhattan layouts. To effectively extract the fine-grained features of documents, we propose an edge embedding network named L-E^3Net. Experimental results show that our proposed image layer modeling method can better handle fine-grained segmentation of documents with non-Manhattan layouts.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
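The NAIVEATTACK idea — stamping a trigger onto a fraction of the synthetic samples before distillation and relabeling them to the attacker's target class — can be sketched minimally in NumPy. The square-patch trigger, poison rate, and function names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def add_trigger(img, size=3):
    """Stamp a small white square in the bottom-right corner -- a
    classic backdoor trigger pattern."""
    out = img.copy()
    out[-size:, -size:] = 1.0
    return out

def poison(images, labels, target_label, rate=0.1, seed=0):
    """Inject the trigger into a random fraction of samples and flip
    their labels to the target class; originals are left untouched."""
    rng = np.random.default_rng(seed)
    n = max(1, int(rate * len(images)))
    idx = rng.choice(len(images), size=n, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx
```

DOORPING differs in that the trigger is not fixed up front but iteratively updated throughout the distillation procedure, which the abstract reports yields near-1.0 attack success rates.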
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from a limited number of support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insight is twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and models will be available.
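The first insight — a mask-derived class center used to re-weight query features — can be illustrated as masked average pooling over the support feature map followed by similarity-based re-weighting of the query feature map. This NumPy sketch is a simplified stand-in for the paper's module; cosine similarity and the function names are assumptions, not the authors' design.

```python
import numpy as np

def class_center(support_feat, support_mask):
    """Masked average pooling: average support features (H, W, C) over
    the pixels inside the binary object mask (H, W)."""
    m = support_mask.astype(float)[..., None]
    return (support_feat * m).sum(axis=(0, 1)) / m.sum()

def reweight_query(query_feat, center, eps=1e-8):
    """Scale each query feature (H, W, C) by its cosine similarity to
    the class center, emphasizing class-relevant locations."""
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + eps)
    c = center / (np.linalg.norm(center) + eps)
    sim = q @ c  # (H, W) similarity map
    return query_feat * sim[..., None]
```

The second enhancement in the abstract operates at the instance level, linking support and query object queries via cross-attention, which has no compact NumPy analogue and is omitted here.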
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the instantiations share the same framework. Motivated by this observation, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.